OpenAI, the artificial intelligence research organization behind the popular ChatGPT language model, is reportedly considering developing its own AI chips to overcome the high cost and limited supply of existing processors. According to Reuters, OpenAI has been exploring various options to advance its chip ambitions, including acquiring an AI chip manufacturer or designing chips internally.
OpenAI is one of the best-funded AI startups in the industry, having raised more than $11 billion in venture capital. It is also nearing $1 billion in annual revenue and is valued at $90 billion in the secondary market, according to a recent Wall Street Journal report. The company aims to create artificial general intelligence (AGI), a hypothetical form of AI that can perform any intellectual task that humans can.
One of the main products of OpenAI is ChatGPT, a large-scale language model that can generate coherent and fluent text on various topics and tasks. ChatGPT is powered by a massive supercomputer provided by Microsoft, which uses 10,000 Nvidia A100 GPUs. GPUs are the dominant type of processors for AI training and inference, as they can perform many parallel computations efficiently.
However, GPUs are also very expensive and in short supply, as the demand for AI applications has skyrocketed in recent years. Nvidia, which holds more than 80% of the global AI processor market share, according to Reuters, has said that its best-performing AI chips are sold out until 2024. Microsoft has also warned that it is facing a shortage of server hardware needed to run AI services, which could lead to service disruptions.
The operating costs for ChatGPT alone are enormous; an analysis by Bernstein analyst Stacy Rasgon found that if ChatGPT queries grew to a tenth the scale of Google Search, it would require roughly $48.1 billion worth of GPUs initially and about $16 billion worth of chips a year to keep operational.
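The structure of such an estimate is simple to sketch. The back-of-envelope model below is illustrative only: every input is an assumed round number (the actual inputs behind Rasgon's analysis are not public), so the output will not reproduce the quoted figures.

```python
# Illustrative back-of-envelope GPU cost model for large-scale inference.
# ALL inputs are assumptions for illustration, not figures from the analysis.

GOOGLE_QUERIES_PER_DAY = 8.5e9    # assumed daily Google Search volume
CHATGPT_SHARE = 0.10              # "a tenth the scale of Google Search"
QUERIES_PER_GPU_PER_DAY = 25_000  # assumed per-GPU inference throughput
GPU_UNIT_COST = 15_000            # assumed cost per GPU, USD
ANNUAL_REPLACEMENT_RATE = 1 / 3   # assume hardware refreshed every ~3 years

daily_queries = GOOGLE_QUERIES_PER_DAY * CHATGPT_SHARE
gpus_needed = daily_queries / QUERIES_PER_GPU_PER_DAY
initial_capex = gpus_needed * GPU_UNIT_COST
yearly_chip_spend = initial_capex * ANNUAL_REPLACEMENT_RATE

print(f"GPUs needed: {gpus_needed:,.0f}")
print(f"Initial GPU spend: ${initial_capex / 1e9:.2f}B")
print(f"Ongoing chip spend: ${yearly_chip_spend / 1e9:.2f}B/yr")
```

The point of the exercise is that the total is dominated by two highly uncertain inputs, per-GPU throughput and query volume, which is why published estimates of this kind vary so widely.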
OpenAI’s CEO Sam Altman has made the acquisition of more AI chips a top priority for the company, Reuters reports. He has also voiced his concerns about the shortage of GPUs and the high expenses involved in running AI software on such platforms.
To address these challenges, OpenAI is exploring plans to build its own AI chips that would be tailored to its specific needs and goals. The company has not yet committed to this plan, as it would entail significant investment and time. Designing a custom chip could take years and cost hundreds of millions of dollars annually. Acquiring an existing chip company could be easier and faster, but there is no guarantee of success.
OpenAI would not be the first major tech company to pursue creating its own AI chips. Google has developed its own processor, the tensor processing unit (TPU), to train and run large-scale AI systems like BERT and AlphaGo. Amazon offers proprietary chips to AWS customers for both training (Trainium) and inference (Inferentia). Microsoft, which is an investor and partner of OpenAI, is working with AMD to develop an in-house AI chip called Athena, which OpenAI is said to be testing.
By developing its own AI chips, OpenAI could gain more control over its hardware and software stack, reduce its dependence on external suppliers, optimize its performance and efficiency, and lower its costs. However, it could also face technical and logistical challenges, as well as potential competition from other chipmakers.
The decision to build or buy its own AI chips could have a significant impact on OpenAI’s future direction and vision. As the company strives to create AGI and democratize access to AI, it will need to balance its ambitious goals with its practical constraints.